From Direct API Calls to LangChain Abstraction
AI010 Lesson 5

Beyond the Raw Request

When starting with Large Language Models (LLMs), developers typically use direct API calls (like the OpenAI Python library) to send a prompt and receive a completion. While functional, this approach becomes unmanageable as applications scale.

The Problem of Statelessness

Large Language Models are inherently stateless. Every time you send a message, the model "forgets" who you are and what you previously said. Each interaction is a blank slate. To maintain a conversation, you must manually pass the entire history back to the model every single time.
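The bookkeeping this forces on you can be sketched in a few lines. The `reply_fn` stand-in below is hypothetical; with a real API it would be something like `client.chat.completions.create`, but the point is the history that must be resent on every turn.

```python
# Sketch: maintaining conversation state manually with a stateless chat API.
# The model stand-in is illustrative; the message bookkeeping is the point.

history = []  # the FULL history must be resent on every request

def ask(user_message, reply_fn):
    """Append the user turn, get a reply, and record it in history."""
    history.append({"role": "user", "content": user_message})
    reply = reply_fn(history)  # the model only "remembers" what we resend
    history.append({"role": "assistant", "content": reply})
    return reply

# Hypothetical stand-in for a real API call
fake_model = lambda msgs: f"(reply based on {len(msgs)} messages)"

ask("Hello, I'm Ada.", fake_model)
ask("What's my name?", fake_model)  # only answerable because turn 1 was resent
```

If you drop the second `history.append`, or send only the latest message, the model has no way to recall that the user's name is Ada.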

The LangChain Solution

LangChain introduces the ChatOpenAI model wrapper. This isn't a wrapper for its own sake: it is the foundation for modularity. By abstracting the model call, we can later swap models, inject memory, and use templates without rewriting our entire codebase.
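The payoff of abstraction is that application code targets a common interface rather than a specific vendor. The sketch below uses hypothetical stand-in classes (not real LangChain models) to show the idea: anything exposing `.invoke()` can be swapped in without touching the calling code.

```python
# Sketch: code written against a shared .invoke() interface is model-agnostic.
# FakeEchoModel and FakeUpperModel are hypothetical stand-ins for illustration.

class FakeEchoModel:
    def invoke(self, prompt):
        return f"echo: {prompt}"

class FakeUpperModel:
    def invoke(self, prompt):
        return prompt.upper()

def summarize(model, text):
    # Application logic never knows which model sits behind `model`
    return model.invoke(f"Summarize: {text}")

print(summarize(FakeEchoModel(), "pirates"))   # echo: Summarize: pirates
print(summarize(FakeUpperModel(), "pirates"))  # SUMMARIZE: PIRATES
```

Swapping `FakeEchoModel` for `ChatOpenAI` (or any other LangChain chat model) changes one line, not the whole codebase.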

The Pirate Scenario
Imagine a customer email written in "Pirate-style" slang. To translate this into a formal corporate response, a direct API call requires hardcoding the instructions. With LangChain, we separate the "Style" (Pirate vs. Formal) from the "Content" (The Email) using abstraction.
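The separation can be sketched with plain string formatting; in LangChain, a prompt template (e.g. `ChatPromptTemplate`) plays this role. The email text below is an invented example.

```python
# Sketch: keeping the "style" and the "content" as independent variables
# instead of hardcoding both into a single prompt string.

TEMPLATE = (
    "Rewrite the text below in a {style} tone.\n"
    "Text: {content}"
)

pirate_email = "Arrr, me package be three weeks late, ye scallywags!"

# The same template serves any style and any email
prompt = TEMPLATE.format(style="formal corporate", content=pirate_email)
print(prompt)
```

Because style and content are separate inputs, translating the next email, or switching to a "casual friendly" tone, requires no changes to the template itself.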
Question 1
Why do we say LLMs are "stateless"?
They do not have access to the internet.
They cannot generate the same response twice.
They do not inherently remember previous messages in a conversation.
They are only capable of processing text, not data states.
Challenge: Initialize ChatOpenAI
Solve the problem below.
You are building a creative writing assistant and need to initialize your first LangChain model.

Your task is to create a ChatOpenAI instance named my_llm with a temperature of 0.7 to allow for more creative (non-deterministic) responses.
Task
Write the Python code to import and initialize the model.
Solution:
from langchain_openai import ChatOpenAI

# temperature=0.7 increases sampling randomness for more creative,
# non-deterministic responses (0 would be nearly deterministic)
my_llm = ChatOpenAI(temperature=0.7)